'I'll key your car': ChatGPT can become abusive when fed real-life arguments, study finds

The Guardian

In some cases, ChatGPT's outputs went beyond those of the human participants, including personalised insults and explicit threats.

ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study. Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.

One expert not connected with the study described it as "one of the most interesting ever done into AI language and pragmatics".

Dr Vittorio Tantucci, who co-authored the research paper with Prof Jonathan Culpeper at Lancaster University, said their research found AI mirrored the dynamics of real-world disputes.